15 research outputs found

    Multi-focus image fusion using maximum symmetric surround saliency detection

    In digital photography, two or more objects of a scene cannot be focused at the same time. If we focus on one object, we may lose information about the other objects, and vice versa. Multi-focus image fusion is the process of generating an all-in-focus image from several out-of-focus images. In this paper, we propose a new multi-focus image fusion method based on two-scale image decomposition and saliency detection using the maximum symmetric surround. This method is beneficial because its saliency map highlights the salient information in the source images with well-defined boundaries. A weight-map construction method based on this saliency information is developed; the weight map identifies the focused and defocused regions of each image very well. We therefore implemented a new fusion algorithm, based on the weight map, that integrates only focused-region information into the fused image. Unlike multi-scale image fusion methods, a two-scale decomposition is sufficient here, so the method is computationally efficient. The proposed method is tested on several multi-focus image datasets and compared with traditional and recently proposed fusion methods using various fusion metrics. The results show that it gives stable and promising performance compared to the existing methods.
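
    The two-scale, weight-map idea can be sketched in a few lines of Python. The saliency step below is a simplified centre-surround proxy rather than the paper's exact maximum-symmetric-surround formulation, and the function names, window sizes, and weight refinement are illustrative assumptions, not the authors' code.
```python
# Hedged sketch of saliency-weighted two-scale fusion (not the paper's exact method).
import numpy as np
from scipy.ndimage import uniform_filter

def two_scale_decompose(img, size=31):
    """Split a grayscale image into a base (low-pass) layer and a detail layer."""
    base = uniform_filter(img, size=size)
    return base, img - base

def saliency_map(img, surround=31):
    """Centre-surround saliency proxy: distance of each pixel from its
    neighbourhood mean (stand-in for maximum symmetric surround saliency)."""
    return np.abs(img - uniform_filter(img, size=surround))

def fuse(images, size=31):
    """Fuse a list of registered, grayscale multi-focus images (float arrays)."""
    images = [np.asarray(img, dtype=np.float64) for img in images]
    sal = np.stack([saliency_map(img) for img in images])          # (N, H, W)
    # Weight map: per pixel, favour the source with the highest saliency.
    weights = (sal == sal.max(axis=0, keepdims=True)).astype(np.float64)
    weights /= weights.sum(axis=0, keepdims=True)
    bases, details = zip(*(two_scale_decompose(img, size) for img in images))
    # Smooth the weights for the base layer so the two scales are treated
    # differently (a common refinement; the paper's refinement may differ).
    base_w = np.stack([uniform_filter(w, size=size) for w in weights])
    base_w /= base_w.sum(axis=0, keepdims=True) + 1e-12
    fused_base = sum(w * b for w, b in zip(base_w, bases))
    fused_detail = sum(w * d for w, d in zip(weights, details))
    return fused_base + fused_detail
```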

    Matrix Adaptive Synthesis Filter for Uniform Filter Bank

    In this paper, we use a matrix adaptive filter as the synthesis stage of a uniform filter bank (UFB) to reconstruct the input signal. We first develop the underlying mathematical theory by applying the optimal-filtering model at the synthesis stage of the UFB and obtaining an expression for the matrix Wiener filter, and we develop a theorem that simplifies this expression further. In the absence of the required information about the analysis stage, we use adaptive filtering to arrive at the Wiener solution, updating the filter coefficients with the least mean square (LMS) algorithm. Experimental results show that the adaptive filter converges whenever the corresponding Wiener filter is stable.
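
    A minimal sketch of the adaptive-synthesis idea is given below: a matrix FIR filter maps the M subband signals to the blocked output and is driven toward the Wiener solution with LMS updates. The training setup, array shapes, and names (`lms_matrix_synthesis`, `x_sub`, `d`) are illustrative assumptions, not the paper's notation.
```python
# Hedged sketch: LMS adaptation of a matrix synthesis filter for an M-channel
# uniform filter bank. x_sub[n] is the M-vector of subband samples at block
# time n; d[n] is the desired blocked (and suitably delayed) input.
import numpy as np

def lms_matrix_synthesis(x_sub, d, order=8, mu=1e-3):
    """x_sub: (N, M) subband samples, d: (N, M) desired blocked signal.
    Returns the adapted matrix taps W with shape (order, M, M)."""
    N, M = x_sub.shape
    W = np.zeros((order, M, M))
    for n in range(order, N):
        X = x_sub[n - order + 1:n + 1][::-1]   # x[n], x[n-1], ..., x[n-order+1]
        y = np.einsum('kij,kj->i', W, X)       # matrix FIR output at block n
        e = d[n] - y                           # reconstruction error
        # LMS update: W_k += mu * e * x[n-k]^T, applied to every tap k at once.
        W += mu * np.einsum('i,kj->kij', e, X)
    return W
```
    In practice, x_sub would come from the analysis bank and d from a delayed, blocked copy of the original input signal.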

    FPGA IMPLEMENTATION OF LOW COMPLEXITY LINEAR PERIODICALLY TIME VARYING FILTER

    This paper presents a low-complexity architecture for a linear periodically time-varying (LPTV) filter. The architecture is based on the multi-input multi-output (MIMO) representation of LPTV filters: the input signal is divided into blocks and processed in parallel, thereby considerably reducing the effective input sampling rate. A single multiplier can be shared for each linear time-invariant (LTI) filter in the representation. Each LTI filter is realized in transposed direct form using multiplier-less multiplication structures based on binary common bit patterns (BCS). The proposed structure is simulated, synthesized, and implemented on a Virtex v50efg256-7 field-programmable gate array (FPGA). LPTV systems are a generalization of LTI systems: if the input of an M-period LPTV system is delayed by M samples, the output is delayed by the same number of samples, and an LPTV system with period 1 is simply an LTI system.
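
    As a reference for what the architecture computes, here is a hedged, direct (non-blocked) Python model of an M-period LPTV FIR filter; the MIMO/blocked realization described above implements the same input-output relation on length-M input blocks. Shapes and names are illustrative, not taken from the paper.
```python
# Hedged reference model of an M-period LPTV FIR filter (direct, non-blocked form).
import numpy as np

def lptv_filter(x, H):
    """H is an (M, L) array: row (n mod M) holds the L taps used at time n,
    i.e. y[n] = sum_k H[n % M, k] * x[n - k] with zero initial conditions."""
    x = np.asarray(x, dtype=np.float64)
    M, L = H.shape
    xp = np.concatenate([np.zeros(L - 1), x])   # zero-padded past samples
    y = np.zeros_like(x)
    for n in range(len(x)):
        window = xp[n:n + L][::-1]              # x[n], x[n-1], ..., x[n-L+1]
        y[n] = np.dot(H[n % M], window)
    return y

# Sanity check of the defining LPTV property: delaying the input by M samples
# delays the output by M samples.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)
H = rng.standard_normal((4, 5))                 # period M = 4, length-5 taps
y = lptv_filter(x, H)
y_shift = lptv_filter(np.concatenate([np.zeros(4), x]), H)
assert np.allclose(y_shift[4:], y)
```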

    Cyclostationary signals in multirate linear systems

    This paper presents a systematic approach to using time-frequency representations (TFRs) for the analysis of cyclostationary signals in multirate linear systems. We exploit the blocking operation to perform the analysis in a simple yet efficient manner and illustrate the strength of the TFR for such analysis. The basic idea is that the TFR of a cyclostationary signal is directly related to the rows of the power spectrum matrix of its blocked version. We present examples, including a nonuniform filter bank, to highlight the capabilities of the analysis.
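
    The blocking step the analysis relies on can be sketched as follows; the power spectrum matrix of the blocked signal is estimated here with a generic Welch cross-spectral estimator, while the exact relation of its rows to the TFR follows the paper. Function names and estimator settings are illustrative assumptions.
```python
# Hedged sketch: blocking a signal with period M and estimating the M x M
# power spectrum matrix of the blocked (vector) signal.
import numpy as np
from scipy.signal import csd

def block_signal(x, M):
    """Blocking operation: row n of the result is [x[nM], ..., x[nM + M - 1]]."""
    x = np.asarray(x)
    N = (len(x) // M) * M
    return x[:N].reshape(-1, M)

def power_spectrum_matrix(x, M, nperseg=256):
    """Welch/CSD estimate of the power spectrum matrix of the blocked signal;
    entry (i, j) is the cross-spectral density between polyphase components
    i and j. Assumes len(x) // M >= nperseg."""
    X = block_signal(x, M)
    S = None
    for i in range(M):
        for j in range(M):
            f, Sij = csd(X[:, i], X[:, j], nperseg=nperseg)
            if S is None:
                S = np.zeros((M, M, len(f)), dtype=complex)
            S[i, j] = Sij
    return f, S
```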

    A Novel Lightweight CNN Architecture for the Diagnosis of Brain Tumors Using MR Images

    Over the last few years, brain tumor-related clinical cases have increased substantially, particularly in adults, due to environmental and genetic factors. If tumors are not identified in the early stages, there is a risk of severe medical complications, including death, so early diagnosis plays a vital role in treatment planning and in improving a patient's condition. Brain tumors have different forms, properties, and treatments, and their manual identification and classification are complex, time-consuming, and error-prone. Based on these observations, we developed an automated methodology for detecting and classifying brain tumors using the magnetic resonance (MR) imaging modality. The proposed work includes three phases: pre-processing, classification, and segmentation. In the pre-processing phase, we start with skull stripping, using morphological and thresholding operations to eliminate non-brain matter such as skin, muscle, fat, and eyeballs, and then apply image data augmentation to improve model accuracy by reducing overfitting. In the classification phase, we develop a novel lightweight convolutional neural network (lightweight CNN) to extract features from the skull-free, augmented brain MR images and classify them as normal or abnormal. Finally, in the segmentation phase, we obtain the infected tumor regions from the brain MR images using a fast-linking modified spiking cortical model (FL-MSCM). With this sequence of operations, our framework achieved 99.58% classification accuracy and a dice similarity coefficient (DSC) of 95.7%. The experimental results illustrate the efficiency of the proposed framework and its appreciable performance compared to existing techniques.
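
    For illustration, a classifier of the "lightweight CNN" kind can be assembled in a few Keras lines; the layer counts, filter sizes, and input shape below are assumptions made for the sketch, not the architecture reported in the paper.
```python
# Hedged sketch of a small binary CNN classifier for skull-stripped MR slices.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lightweight_cnn(input_shape=(128, 128, 1)):
    """Small CNN for normal-vs-abnormal classification (illustrative layout)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),                  # skull-stripped MR slice
        layers.Conv2D(16, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation='relu', padding='same'),
        layers.GlobalAveragePooling2D(),                  # keeps the model small
        layers.Dense(32, activation='relu'),
        layers.Dense(1, activation='sigmoid'),            # normal vs. abnormal
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model
```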

    Fusion of Infrared and Visible Sensor Images Based on Anisotropic Diffusion and Karhunen-Loeve Transform


    Analysis of Signals via Non-Maximally Decimated Non-Uniform Filter Banks
